311 research outputs found

    Verb Physics: Relative Physical Knowledge of Actions and Objects

    Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., "My house is bigger than me." However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like "Tyler entered his house" implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance.
    Comment: 11 pages, published in Proceedings of ACL 201
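    The core idea above can be illustrated with a toy sketch: actions imply relative physical relations between their arguments, and those relations can be chained. The rule table and the transitive-closure step below are illustrative stand-ins, not the paper's actual joint-inference model.

```python
# Toy sketch: verbs imply relative physical relations between their arguments,
# e.g. "x entered y" suggests size(x) < size(y). The rule set and the simple
# transitive closure are illustrative assumptions, not the paper's model.
IMPLICATIONS = {"entered": ("size", "<")}  # hypothetical rule table

def extract(statements):
    """From (subject, verb, object) triples, derive relative-knowledge facts.

    Each fact (dim, x, y) reads: x < y along dimension dim."""
    facts = set()
    for x, verb, y in statements:
        if verb in IMPLICATIONS:
            dim, _rel = IMPLICATIONS[verb]
            facts.add((dim, x, y))
    return facts

def closure(facts):
    """Transitive closure: x < y and y < z imply x < z along the same dimension."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (d1, a, b) in list(facts):
            for (d2, c, e) in list(facts):
                if d1 == d2 and b == c and (d1, a, e) not in facts:
                    facts.add((d1, a, e))
                    changed = True
    return facts
```

    In the paper, such implications and the relative knowledge they yield are inferred jointly rather than from a fixed rule table; the sketch only shows how indirect clues in text can propagate into relational knowledge.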

    MultiTalk: A Highly-Branching Dialog Testbed for Diverse Conversations

    We study conversational dialog in which there are many possible responses to a given history. We present the MultiTalk Dataset, a corpus of over 320,000 sentences of written conversational dialog that balances a high branching factor (10) with several conversation turns (6) through selective branch continuation. We make multiple contributions to study dialog generation in the highly branching setting. In order to evaluate a diverse set of generations, we propose a simple scoring algorithm, based on bipartite graph matching, to optimally incorporate a set of diverse references. We study multiple language generation tasks at different levels of predictive conversation depth, using textual attributes induced automatically from pretrained classifiers. Our culminating task is a challenging theory of mind problem, a controllable generation task which requires reasoning about the expected reaction of the listener.
    Comment: 7 pages, AAAI-2
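    The bipartite-matching scoring idea can be sketched minimally: pair each generated response with a distinct reference so that total similarity is maximized. The similarity metric below (token-overlap F1) and the brute-force search are illustrative assumptions; the paper's actual metric and matching procedure may differ.

```python
# Sketch of scoring a set of diverse generations against a set of diverse
# references via optimal one-to-one bipartite matching. Token-overlap F1 is
# an illustrative stand-in for the paper's similarity metric; the exhaustive
# search over permutations is only practical for small reference sets.
from itertools import permutations

def token_f1(a: str, b: str) -> float:
    """Bag-of-tokens F1 between two strings (illustrative similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    overlap = len(ta & tb)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(tb), overlap / len(ta)
    return 2 * p * r / (p + r)

def matched_score(generations, references):
    """Average similarity under the best one-to-one assignment of
    generations to references (assumes len(generations) <= len(references))."""
    assert len(generations) <= len(references)
    n = len(generations)
    best = 0.0
    for perm in permutations(range(len(references)), n):
        total = sum(token_f1(generations[i], references[j])
                    for i, j in zip(range(n), perm))
        best = max(best, total)
    return best / n
```

    An exact polynomial-time alternative for larger sets would be the Hungarian algorithm (e.g., SciPy's `linear_sum_assignment`); the brute force above keeps the sketch dependency-free.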

    Probing two-path electron quantum interference in strong-field ionization with time-correlation filtering

    Attosecond dynamics in strong-field tunnel ionization are encoded in intricate holographic patterns in the photoelectron momentum distributions. These patterns show the interference between two or more superposed quantum electron trajectories, which are defined by their ionization times and subsequent evolution in the laser field. We determine the ionization time separation between interfering pairs of electron orbits by performing a differential Fourier analysis on the measured momentum spectrum. We identify electron holograms formed by trajectory pairs whose ionization times are separated by less than a quarter cycle, between a quarter cycle and a half cycle, between a half cycle and three fourths of a cycle, and a full cycle apart. We compare our experimental results to the predictions of the Coulomb quantum orbit strong-field approximation (CQSFA) and find significant agreement. We also time-filter the CQSFA trajectory calculations to demonstrate the validity of the technique on spectra with known time correlations. As a general analysis technique, the filter can be applied to all energy- and angularly resolved data sets to recover time correlations between interfering electron pathways, providing an important tool to analyze any strong-field ionization spectra. Moreover, it is independent of theory and can be applied directly to experiments, without the need for a direct comparison with orbit-based theoretical methods.
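    The filtering principle can be sketched in a simplified form: a hologram's fringe frequency along the momentum axis is tied to the ionization-time separation of the interfering trajectory pair, so transforming the spectrum to the conjugate domain, windowing there, and transforming back isolates the contribution of pairs within a chosen time-separation band. The grid, the two fringe frequencies, and the rectangular window below are illustrative assumptions, not the paper's experimental parameters.

```python
# Simplified sketch of time-correlation filtering: a momentum spectrum with
# two superposed fringe systems (different conjugate "time" separations) is
# Fourier transformed, one band is kept, and the inverse transform recovers
# only that interference component. Frequencies and grid size are illustrative.
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def time_filter(signal, keep):
    """Zero every Fourier component whose index is not selected by `keep`."""
    X = dft(signal)
    return idft([X[k] if keep(k, len(X)) else 0.0 for k in range(len(X))])

N = 64
# Two fringe systems along the momentum grid: slow (k = 3) and fast (k = 12).
spectrum = [math.cos(2 * math.pi * 3 * n / N) + math.cos(2 * math.pi * 12 * n / N)
            for n in range(N)]
# Keep only the slow fringe system (index 3 and its mirror N - 3).
low = time_filter(spectrum, lambda k, size: k in (3, size - 3))
```

    In practice one would window smoothly rather than with a hard cutoff, and apply the transform along each energy or angular slice of the measured distribution; the sketch only shows the band-selection step.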

    Paragraph-level Commonsense Transformers with Recurrent Memory

    Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text. A recent model, COMET, can generate such implicit commonsense inferences along several dimensions such as pre- and post-conditions, motivations, and mental states of the participants. However, COMET was trained on commonsense inferences of short phrases, and is therefore discourse-agnostic. When presented with each sentence of a multi-sentence narrative, it might generate inferences that are inconsistent with the rest of the narrative. We present the task of discourse-aware commonsense inference. Given a sentence within a narrative, the goal is to generate commonsense inferences along predefined dimensions, while maintaining coherence with the rest of the narrative. Such large-scale paragraph-level annotation is hard to get and costly, so we use available sentence-level annotations to efficiently and automatically construct a distantly supervised corpus. Using this corpus, we train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives. PARA-COMET captures both semantic knowledge pertaining to prior world knowledge, and episodic knowledge involving how current events relate to prior and future events in a narrative. Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
    Comment: AAAI 202